Conditional convergence of photorefractive perceptron learning.

Authors

  • K Y Hsu
  • S H Lin
  • P Yeh
Abstract

We consider the convergence characteristics of a perceptron learning algorithm, taking into account the decay of photorefractive holograms during the process of interconnection weight changes. As a result of the hologram erasure, the convergence of the learning process depends on the exposure time during the weight changes. A mathematical proof of the conditional convergence is presented and discussed, as well as computer simulations of the photorefractive perceptrons.
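The abstract's key idea — that each recording exposure partially erases the hologram already stored, so the effective learning rule multiplies the existing weights by a decay factor before adding the correction — can be illustrated with a minimal sketch. This is not the authors' model; the `decay` parameter, the toy data, and the function name are assumptions, with `decay` standing in for the exponential erasure factor exp(-t/tau) set by the exposure time t and the material's erasure time constant tau.

```python
import numpy as np

def train_decaying_perceptron(X, y, decay=0.95, lr=1.0, epochs=100):
    """Perceptron learning where every weight update also decays the
    stored weights, mimicking photorefractive hologram erasure.
    Illustrative sketch only; `decay` models exp(-t/tau)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if (w @ xi + b) >= 0 else -1
            if pred != yi:
                # each new recording exposure partially erases the old hologram
                w = decay * w + lr * yi * xi
                b = decay * b + lr * yi
                errors += 1
        if errors == 0:  # converged: all samples classified correctly
            break
    return w, b

# Linearly separable toy data (hypothetical)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = train_decaying_perceptron(X, y)
```

With `decay` close to 1 (short exposures) the rule behaves like the ordinary perceptron; as `decay` shrinks (longer exposures), erasure can outpace learning, which is the intuition behind the paper's conditional-convergence result.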

Similar articles

Learning with Lookahead: Can History-Based Models Rival Globally Optimized Models?

This paper shows that the performance of history-based models can be significantly improved by performing lookahead in the state space when making each classification decision. Instead of simply using the best action output by the classifier, we determine the best action by looking into possible sequences of future actions and evaluating the final states realized by those action sequences. We p...


On the convergence speed of artificial neural networks in the solving of linear systems

Artificial neural networks have advantages such as learning, adaptation, fault-tolerance, parallelism, and generalization. This paper scrutinizes the application of diverse learning methods to the speed of convergence of neural networks. For this aim, first we introduce a perceptron method based on artificial neural networks which has been applied for solving a non-singula...


Towards Shockingly Easy Structured Classification: A Search-based Probabilistic Online Learning Framework

There are two major approaches for structured classification. One is the probabilistic gradient-based methods such as conditional random fields (CRF), which have high accuracy but drawbacks: slow training, and no support for search-based optimization (which is important in many cases). The other one is the search-based learning methods such as perceptrons and margin infused relaxed algorithm...


Conditional Distribution Learning with Neural Networks and its Application to Channel Equalization - IEEE Transactions on Signal Processing

We present a conditional distribution learning formulation for real-time signal processing with neural networks based on a recent extension of maximum likelihood theory—partial likelihood (PL) estimation—which allows for i) dependent observations and ii) sequential processing. For a general neural network conditional distribution model, we establish a fundamental information-theoretic connectio...


Minibatch and Parallelization for Online Large Margin Structured Learning

Online learning algorithms such as perceptron and MIRA have become popular for many NLP tasks thanks to their simpler architecture and faster convergence over batch learning methods. However, while batch learning such as CRF is easily parallelizable, online learning is much harder to parallelize: previous efforts often witness a decrease in the converged accuracy, and the speedup is typically v...



Journal:
  • Optics Letters

Volume 18, Issue 24

Pages: -

Published: 1993